Less is More: Non-uniform Road Segments are Efficient for Bus Arrival Prediction

Huang, Zhen, Deng, Jiaxin, Xu, Jiayu, Pang, Junbiao, Yu, Haitao

arXiv.org Artificial Intelligence

In bus arrival time prediction, the process of organizing road infrastructure network data into homogeneous entities is known as segmentation. Segmenting a road network is widely recognized as the first and most critical step in developing an arrival time prediction system, particularly for auto-regressive-based approaches. Traditional methods typically employ a uniform segmentation strategy, which fails to account for varying physical constraints along roads, such as road conditions, intersections, and points of interest, thereby limiting prediction efficiency. In this paper, we propose a Reinforcement Learning (RL)-based approach to efficiently and adaptively learn non-uniform road segments for arrival time prediction. Our method decouples the prediction process into two stages: 1) Non-uniform road segments are extracted based on their impact scores using the proposed RL framework; and 2) A linear prediction model is applied to the selected segments to make predictions. This method ensures optimal segment selection while maintaining computational efficiency, offering a significant improvement over traditional uniform approaches. Furthermore, our experimental results suggest that the linear approach can even achieve better performance than more complex methods. Extensive experiments demonstrate the superiority of the proposed method, which not only enhances efficiency but also improves learning performance on large-scale benchmarks.
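A minimal sketch of the two-stage decoupling described above, with a simple correlation score standing in for the paper's learned RL impact scores; the synthetic data, the scoring rule, and the "high-impact" segment indices are all illustrative assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: travel times over 20 uniform micro-segments; only a few
# segments (e.g. near intersections) actually drive the arrival time.
n_trips, n_segments = 200, 20
X = rng.exponential(scale=1.0, size=(n_trips, n_segments))
true_weights = np.zeros(n_segments)
true_weights[[3, 7, 15]] = [2.0, 1.5, 3.0]            # hypothetical high-impact segments
y = X @ true_weights + rng.normal(0.0, 0.1, n_trips)  # total arrival time

# Stage 1 (stand-in for the RL selector): score each segment by its
# correlation with the target and keep the top-k as non-uniform segments.
scores = np.abs(np.corrcoef(X.T, y)[:-1, -1])
selected = np.argsort(scores)[-3:]

# Stage 2: fit a plain linear model on the selected segments only.
Xs = X[:, selected]
w, *_ = np.linalg.lstsq(Xs, y, rcond=None)
rmse = float(np.sqrt(np.mean((Xs @ w - y) ** 2)))
```

The point of the sketch is the decoupling itself: once a good subset of segments is chosen, an ordinary least-squares model over those segments is cheap to fit and evaluate.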


Infinite Hidden Semi-Markov Modulated Interaction Point Process

Neural Information Processing Systems

The correlation between events is ubiquitous and important for modelling temporal events. In many cases, the correlation exists between not only events' emitted observations, but also their arrival times. State space models (e.g., the hidden Markov model) and stochastic interaction point process models (e.g., the Hawkes process) have been studied extensively yet separately for the two types of correlations in the past. In this paper, we propose a Bayesian nonparametric approach that considers both types of correlations by unifying and generalizing the hidden semi-Markov model and the interaction point process model. The proposed approach can simultaneously model both the observations and arrival times of temporal events, and determine the number of latent states from data.
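To make the arrival-time side of this concrete, a Hawkes process (the interaction point process named above) lets each past event raise the instantaneous rate of future events. A minimal sketch of its conditional intensity, with illustrative parameter values; this shows only the self-excitation component, not the unified Bayesian nonparametric model proposed in the paper.

```python
import math

def hawkes_intensity(t, events, mu=0.2, alpha=0.8, beta=1.0):
    """Conditional intensity lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i))."""
    return mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events if ti < t)

# Intensity jumps right after each arrival, then decays back toward mu.
events = [1.0, 1.5, 3.0]
lam = hawkes_intensity(3.1, events)   # shortly after the third event: elevated rate
base = hawkes_intensity(0.5, events)  # before any event: just the baseline mu
```

This dependence of the rate on past arrival times is exactly the "correlation between arrival times" that a plain hidden (semi-)Markov model over observations does not capture.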




All reviewers

Neural Information Processing Systems

We would like to thank all the reviewers for their positive assessment of our work. We would like to politely disagree with your statement that "the proposed model seems to underperform the [...]". We found that increasing the block size beyond 16 for the TriTPP model did not improve the performance; in contrast, the RNN did benefit from larger hidden sizes (32 or 64). In the scalability experiment (Section 6.1) we made sure that both models have approximately the [...]. We found that stacking the nonlinearities (splines) did not improve the performance on the validation set. Thank you, we will run more experiments to see if this will allow us to improve the performance.


The Curse of Shared Knowledge: Recursive Belief Reasoning in a Coordination Game with Imperfect Information

Bolander, Thomas, Engelhardt, Robin, Nicolet, Thomas S.

arXiv.org Artificial Intelligence

Common knowledge is crucial for safe group coordination. In its absence, humans must rely on shared knowledge, which is inherently limited in depth and therefore prone to coordination failures, because any finite-order knowledge attribution allows for an even higher order attribution that may change what is known by whom. In three separate experiments involving 802 participants, we investigate the extent to which humans can differentiate between common knowledge and nth-order shared knowledge. We designed a two-person coordination game with imperfect information to simplify the recursive game structure and higher-order uncertainties into a relatable everyday scenario. In this game, coordination for the highest payoff requires a specific fact to be common knowledge between players. However, this fact cannot become common knowledge in the game. The fact can at most be nth-order shared knowledge for some n. Our findings reveal that even at quite shallow depths of shared knowledge (low values of n), players behave as though they possess common knowledge, and claim similar levels of certainty in their actions, despite incurring significant penalties when falsely assuming guaranteed coordination. We term this phenomenon 'the curse of shared knowledge'. It arises either from the players' inability to distinguish between higher-order shared knowledge and common knowledge, or from their implicit assumption that their co-player cannot make this distinction.


Action-Driven Processes for Continuous-Time Control

He, Ruimin, Lin, Shaowei

arXiv.org Machine Learning

Modeling systems that exhibit both continuous and discontinuous state changes presents a significant challenge in machine learning. For instance, biological spiking networks feature the continuous decay of neuron potentials alongside discontinuous spikes, which cause abrupt increases in the potentials of neighboring downstream neurons. Designing appropriate objective functions and applying gradient methods that work with these discontinuities are among the difficulties of working with such systems. Traditionally, ordinary and partial differential equations (ODEs and PDEs) are used to model continuous state changes, while Markov decision processes (MDPs) are employed to capture discrete actions that drive environmental transitions. In this paper, we study Action-Driven Processes (ADPs), also known as generalized semi-Markov processes [12, 5, 16], which unify both types of dynamics within a single framework. With continuous-time states and actions at the core of ADPs, a natural question is whether it is possible to learn optimal policies for action selection using traditional reinforcement learning methods. The control-as-inference tutorial [9] elegantly demonstrated that maximum entropy reinforcement learning can be formulated as minimizing the Kullback-Leibler (KL) divergence between (a) a true trajectory distribution generated by action-state transitions and the policy, and (b) a model trajectory distribution that depends on the reward function.
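The KL formulation mentioned at the end of the abstract can be written out explicitly. In the notation below (ours, not the paper's), $q_\pi(\tau)$ is the trajectory distribution induced by the transition dynamics together with the policy $\pi$, and the model distribution $p(\tau)$ tilts a reward-free prior $p_0(\tau)$ by the exponentiated return:

```latex
\min_{\pi} \; D_{\mathrm{KL}}\!\left( q_\pi(\tau) \,\middle\|\, p(\tau) \right),
\qquad
p(\tau) \;\propto\; p_0(\tau)\, \exp\!\Big( \sum_{t} r(s_t, a_t) \Big).
```

Expanding the KL divergence recovers the maximum entropy RL objective: the expected return $\mathbb{E}_{q_\pi}\big[\sum_t r(s_t, a_t)\big]$ plus an entropy bonus on the policy, which is the equivalence the control-as-inference tutorial [9] establishes.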


Hierarchical Optimization via LLM-Guided Objective Evolution for Mobility-on-Demand Systems

Zhang, Yi, Long, Yushen, Ni, Yun, Huang, Liping, Wang, Xiaohong, Liu, Jun

arXiv.org Artificial Intelligence

Online ride-hailing platforms aim to deliver efficient mobility-on-demand services, often facing challenges in balancing dynamic and spatially heterogeneous supply and demand. Existing methods typically fall into two categories: reinforcement learning (RL) approaches, which suffer from data inefficiency, oversimplified modeling of real-world dynamics, and difficulty enforcing operational constraints; or decomposed online optimization methods, which rely on manually designed high-level objectives that lack awareness of low-level routing dynamics. To address this issue, we propose a novel hybrid framework that integrates a large language model (LLM) with mathematical optimization in a dynamic hierarchical system: (1) it is training-free, removing the need for large-scale interaction data as in RL, and (2) it leverages the LLM to bridge cognitive limitations caused by problem decomposition by adaptively generating high-level objectives. Within this framework, the LLM serves as a meta-optimizer, producing semantic heuristics that guide a low-level optimizer responsible for constraint enforcement and real-time decision execution. These heuristics are refined through a closed-loop evolutionary process, driven by harmony search, which iteratively adapts the LLM prompts based on feasibility and performance feedback from the optimization layer. Extensive experiments based on scenarios derived from both the New York and Chicago taxi datasets demonstrate the effectiveness of our approach, achieving an average improvement of 16% compared to state-of-the-art baselines.
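A minimal sketch of the harmony search loop that drives the evolutionary refinement. Here each "harmony" is a pair of objective weights and a toy fitness function stands in for the feasibility/performance feedback from the low-level optimizer; the optimum location, the parameter values, and the two-dimensional encoding are all illustrative assumptions, not the paper's setup.

```python
import random

random.seed(0)

def fitness(w):
    # Hypothetical feedback signal: best trade-off at weights (0.6, 0.4).
    return -((w[0] - 0.6) ** 2 + (w[1] - 0.4) ** 2)

def harmony_search(n_iter=200, hm_size=8, hmcr=0.9, par=0.3, bw=0.05):
    # Harmony memory: a pool of candidate weight vectors in [0, 1]^2.
    memory = [[random.random(), random.random()] for _ in range(hm_size)]
    for _ in range(n_iter):
        new = []
        for d in range(2):
            if random.random() < hmcr:            # memory consideration
                v = random.choice(memory)[d]
                if random.random() < par:         # pitch adjustment
                    v += random.uniform(-bw, bw)
            else:                                 # random exploration
                v = random.random()
            new.append(min(1.0, max(0.0, v)))
        # Replace the worst harmony if the new candidate scores better.
        worst = min(range(hm_size), key=lambda i: fitness(memory[i]))
        if fitness(new) > fitness(memory[worst]):
            memory[worst] = new
    return max(memory, key=fitness)

best = harmony_search()
```

In the paper's framework, the analogous loop would mutate LLM prompt components rather than numeric weights, with the optimization layer's feedback playing the role of `fitness`.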